    Parameter selection in lattice-based cryptography

    On the concrete hardness of Learning with Errors

    Abstract. The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and propose a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
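    The abstract above mentions a Sage module for computing concrete hardness estimates. As a flavour of what such estimation involves, here is a self-contained Python sketch, not the paper's refined model: it uses the standard heuristic formula for the root-Hermite factor achieved by BKZ with block size beta, and the widely cited core-SVP sieving cost of roughly 2^(0.292*beta). Both heuristics are common in the literature; treating them as the cost model is an assumption of this sketch, not a result of the paper.

```python
# Illustrative sketch only; the paper's Sage module implements far more
# refined cost models. We use two standard heuristics from the literature:
#   - the root-Hermite factor delta achieved by BKZ with block size beta,
#   - the core-SVP sieving cost of about 2^(0.292 * beta).
from math import e, pi

def root_hermite_factor(beta: int) -> float:
    """Heuristic root-Hermite factor achieved by BKZ with block size beta."""
    return ((pi * beta) ** (1.0 / beta) * beta / (2 * pi * e)) ** (1.0 / (2 * (beta - 1)))

def core_svp_cost_bits(beta: int) -> float:
    """log2 of the core-SVP (sieving) cost estimate for BKZ with block size beta."""
    return 0.292 * beta

if __name__ == "__main__":
    for beta in (100, 200, 300, 400):
        print(f"beta={beta}: delta ~ {root_hermite_factor(beta):.6f}, "
              f"cost ~ 2^{core_svp_cost_bits(beta):.1f}")
```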

    High-Precision Arithmetic in Homomorphic Encryption

    In most RLWE-based homomorphic encryption schemes the native plaintext elements are polynomials in a ring $\mathbb{Z}_t[x]/(x^n+1)$, where $n$ is a power of $2$, and $t$ an integer modulus. For performing integer or rational number arithmetic one typically uses an encoding scheme, which converts the inputs to polynomials, and allows the result of the homomorphic computation to be decoded to recover the result as an integer or rational number respectively. The problem is that the modulus $t$ often needs to be extremely large to prevent the plaintext polynomial coefficients from being reduced modulo $t$ during the computation, which is a requirement for the decoding operation to work correctly. This results in larger noise growth, and prevents the evaluation of deep circuits, unless the encryption parameters are significantly increased. We combine a trick of Hoffstein and Silverman, where the modulus $t$ is replaced by a polynomial $x-b$, with the Fan-Vercauteren homomorphic encryption scheme. This yields a new scheme with a very convenient plaintext space $\mathbb{Z}/(b^n+1)\mathbb{Z}$. We then show how rational numbers can be encoded as elements of this plaintext space, enabling homomorphic evaluation of deep circuits with high-precision rational number inputs. We perform a fair and detailed comparison to the Fan-Vercauteren scheme with the Non-Adjacent Form encoder, and find that the new scheme significantly outperforms this approach. For example, when the new scheme allows us to evaluate circuits of depth $9$ with $32$-bit integer inputs, in the same parameter setting the Fan-Vercauteren scheme only allows us to go up to depth $2$. We conclude by discussing how known applications can benefit from the new scheme.
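    To make the plaintext space concrete: because $x^n+1$ evaluates to $b^n+1$ at $x=b$, a plaintext polynomial can be read as an integer modulo $b^n+1$ by evaluating it at $b$. The following toy Python sketch illustrates only this base-$b$ encode/decode round trip for non-negative integers; the paper's actual encoder also handles rationals and signed digits and is integrated with the Fan-Vercauteren scheme.

```python
# Minimal sketch of the idea behind the plaintext space Z/(b^n+1)Z: since
# x^n + 1 evaluates to b^n + 1 at x = b, a plaintext polynomial encodes the
# integer obtained by evaluating it at b, modulo b^n + 1. Toy version for
# non-negative integers only.
def encode(m: int, b: int, n: int) -> list[int]:
    """Base-b digits of m, least significant first, as coefficients of a degree-<n polynomial."""
    assert 0 <= m < b**n
    digits = []
    for _ in range(n):
        digits.append(m % b)
        m //= b
    return digits

def decode(coeffs: list[int], b: int, n: int) -> int:
    """Evaluate the polynomial at x = b, reducing modulo b^n + 1."""
    modulus = b**n + 1
    return sum(c * pow(b, i, modulus) for i, c in enumerate(coeffs)) % modulus

if __name__ == "__main__":
    b, n = 2, 16                     # toy parameters; real instantiations use much larger n
    m = 12345
    assert decode(encode(m, b, n), b, n) == m
```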

    Discretisation and Product Distributions in Ring-LWE

    A statistical framework applicable to Ring-LWE was outlined by Murphy and Player (IACR eprint 2019/452). Its applicability was demonstrated with an analysis of the decryption failure probability for degree-1 and degree-2 ciphertexts in the homomorphic encryption scheme of Lyubashevsky, Peikert and Regev (IACR eprint 2013/293). In this paper, we clarify and extend results presented by Murphy and Player. Firstly, we make precise the approximation of the discretisation of a Normal random variable as a Normal random variable, as used in the encryption process of Lyubashevsky, Peikert and Regev. Secondly, we show how to extend the analysis given by Murphy and Player to degree-k ciphertexts, by precisely characterising the distribution of the noise in these ciphertexts.
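    As a rough numerical illustration of the first contribution (an illustration only; the paper makes the approximation precise), rounding a sample of a Normal random variable with standard deviation sigma to the nearest integer produces a distribution whose variance is close to sigma^2 + 1/12, since the rounding error is approximately uniform on [-1/2, 1/2]:

```python
# Numerical illustration (not the paper's precise analysis): rounding a
# Normal(0, sigma^2) sample to the nearest integer yields a distribution whose
# variance is close to sigma^2 + 1/12, because the rounding error is roughly
# uniform on [-1/2, 1/2] (variance 1/12) and nearly independent of the sample.
import random
import statistics

sigma = 3.19                         # the error width used in the HE Security Standard
N = 200_000
samples = [round(random.gauss(0.0, sigma)) for _ in range(N)]
print("empirical variance:", statistics.pvariance(samples))
print("sigma^2 + 1/12:    ", sigma**2 + 1/12)
```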

    On the Feasibility and Impact of Standardising Sparse-secret LWE Parameter Sets for Homomorphic Encryption

    In November 2018, the HomomorphicEncryption.org consortium published the Homomorphic Encryption Security Standard. The Standard recommends several sets of Learning with Errors (LWE) parameters that can be selected by application developers to achieve a target security level $\lambda \in \{128,192,256\}$. These parameter sets all involve a power-of-two dimension $n \leq 2^{15}$, an error distribution of standard deviation $\sigma \approx 3.19$, and a secret whose coefficients are either chosen uniformly in $\mathbb{Z}_q$, chosen according to the error distribution, or chosen uniformly in $\{-1, 0, 1\}$. These parameter sets do not necessarily reflect implementation choices in the most commonly used homomorphic encryption libraries. For example, several libraries support dimensions that are not a power of two. Moreover, all known implementations of bootstrapping for the CKKS, BFV and BGV schemes use a sparse secret and a large ring dimension such as $n \in \{2^{16}, 2^{17}\}$, and advanced applications such as logistic regression have used equally large dimensions. This motivates the community to consider widening the recommended parameter sets, and the purpose of this paper is to investigate such possible extensions. We explore the security of possible sparse-secret LWE parameter sets, taking into account hybrid attacks, which are often the most competitive in the sparse-secret regime. We present a conservative analysis of the hybrid decoding and hybrid dual attacks for parameter sets of varying sparsity, with the goal of balancing security requirements with bootstrapping efficiency. We also show how the methodology in the Standard can be easily adapted to support parameter sets with power-of-two dimension $n \geq 2^{16}$. We conclude with a number of discussion points to motivate future improvements to the Standard.
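    To see why sparsity invites hybrid attacks, consider the size of the guessing space: a ternary secret of length $n$ with exactly $h$ nonzero entries has only $\binom{n}{h} 2^h$ possibilities, far fewer than $3^n$ when $h$ is small, and hybrid attacks exploit this by guessing part of the secret while running a lattice attack on the rest. The Python sketch below computes this count for a few illustrative weights; the weights are hypothetical examples, not parameter recommendations from the paper.

```python
# Back-of-the-envelope illustration (not the paper's analysis): count the
# ternary secrets of length n with exactly h nonzero entries, C(n, h) * 2^h.
from math import comb, log2

def log2_ternary_secrets(n: int, h: int) -> float:
    """log2 of the number of ternary secrets of length n with exactly h nonzero entries."""
    return log2(comb(n, h)) + h

n = 2**16
for h in (64, 128, 256, 2 * n // 3):   # sparse weights vs. a typical dense ternary weight
    print(f"n={n}, h={h}: search space ~ 2^{log2_ternary_secrets(n, h):.1f}")
```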

    Optimizations and Trade-offs for HElib

    In this work, we investigate the BGV scheme as implemented in HElib. We begin by performing an implementation-specific noise analysis of BGV. This allows us to derive much tighter bounds than were previously known. To confirm this, we compare our bounds against the state of the art. We find that, while our bounds are at most $1.8$ bits off the experimentally observed values, they are as much as $29$ bits tighter than previous work. Finally, to illustrate the importance of our results, we propose new and optimised parameters for HElib. In HElib, the special modulus is chosen to be $k$ times larger than the current ciphertext modulus $Q_i$. For a ratio of subsequent ciphertext moduli $\log\left(\frac{Q_i}{Q_{i-1}}\right) = 54$ (a very common choice in HElib), we can optimise $k$ by up to $26$ bits. This means that we can either enable more multiplications without having to switch to larger parameters, or reduce the size of the evaluation keys, thus reducing communication costs in relevant applications. We argue that our results are near-optimal.
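    The following back-of-the-envelope sketch illustrates the trade-off described above. The 54-bit level size and the 26-bit saving come from the abstract; the 640-bit total modulus budget and the 120-bit baseline special modulus are hypothetical values chosen only to make the arithmetic concrete, not figures from the paper.

```python
# Illustrative arithmetic only: shrinking the special modulus frees bits in
# the total modulus budget fixed by the security analysis, which can be spent
# on extra 54-bit levels (more multiplications) or left unused (smaller keys).
BITS_PER_LEVEL = 54                  # log2(Q_i / Q_{i-1}), a common HElib choice
BUDGET = 640                         # hypothetical total log2 modulus budget

for special_bits in (120, 120 - 26): # hypothetical baseline vs. the 26-bit optimisation
    levels = (BUDGET - special_bits) // BITS_PER_LEVEL
    used = levels * BITS_PER_LEVEL + special_bits
    print(f"special modulus {special_bits} bits -> {levels} levels, {used}/{BUDGET} bits used")
```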

    Verifying Classic McEliece: examining the role of formal methods in post-quantum cryptography standardisation

    Developers of computer-aided cryptographic tools are optimistic that formal methods will become a vital part of developing new cryptographic systems. We study the use of such tools to specify and verify the implementation of Classic McEliece, one of the code-based cryptography candidates in the fourth round of the NIST Post-Quantum Standardisation Process. From our case study we draw conclusions about the practical applicability of these methods to the development of novel cryptography.